    Euthanasia: Human Rights and Inalienability

    Increasing the density of available pareto optimal solutions

    The set of available multi-objective optimization algorithms continues to grow. This fact can be partially attributed to their widespread use and applicability. However, this increase also suggests that several issues remain to be addressed satisfactorily. One such issue is the diversity and the number of solutions available to the decision maker (DM). Even for algorithms very well suited to a particular problem, it is difficult, mainly due to the computational cost, to use a population large enough to ensure the likelihood of obtaining a solution close to the DM's preferences. In this paper we present a novel methodology that produces additional Pareto optimal solutions from a Pareto optimal set obtained at the end of a run of any multi-objective optimization algorithm. This method, which we refer to as Pareto estimation, is tested on a set of 2- and 3-objective test problems and a 3-objective portfolio optimization problem to illustrate its utility for a real-world problem.
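
    As a rough illustration of the idea only (not the paper's actual algorithm), the sketch below densifies a bi-objective Pareto set by interpolating decision vectors between neighbouring Pareto optimal points. The function name and the simple arc-position parameterization are assumptions, and the interpolated candidates would still need to be re-evaluated on the original problem.

    ```python
    import numpy as np

    def pareto_estimation_sketch(F, X, n_new=100):
        """Densify a bi-objective Pareto set by interpolating decision vectors.

        F : (n, 2) array of objective vectors from a finished optimizer run
        X : (n, d) array of the matching decision vectors
        Returns n_new candidate decision vectors spread along the front.
        """
        order = np.argsort(F[:, 0])        # sort the front by the first objective
        F, X = F[order], X[order]
        # Parameterize the front by the normalized position of the first objective.
        t = (F[:, 0] - F[0, 0]) / (F[-1, 0] - F[0, 0])
        t_new = np.linspace(0.0, 1.0, n_new)
        # Interpolate each decision variable along the front parameterization.
        X_new = np.column_stack(
            [np.interp(t_new, t, X[:, j]) for j in range(X.shape[1])]
        )
        return X_new                        # candidates: re-evaluate before use
    ```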

    Methods for many-objective optimization: an analysis

    Decomposition-based methods are often cited as the solution to problems related to many-objective optimization. These methods employ a scalarizing function to reduce a many-objective problem to a set of single-objective problems, which, upon solution, yields a good approximation of the set of optimal solutions, commonly referred to as the Pareto front. In this work we explore the implications of using decomposition-based methods rather than Pareto-based methods from a probabilistic point of view. Namely, we investigate whether there is an advantage to using a decomposition-based method, for example one using the Chebyshev scalarizing function, over Pareto-based methods.
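
    For reference, the Chebyshev scalarizing function mentioned above turns an m-objective problem into one single-objective problem per weight vector; minimizing it for many weight vectors traces out an approximation of the Pareto front, including points in non-convex regions. A minimal sketch with assumed names:

    ```python
    import numpy as np

    def chebyshev_scalarization(f, w, z_star):
        """Chebyshev scalarizing function g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|.

        f      : objective vector f(x) of a candidate solution x
        w      : non-negative weight vector defining one subproblem
        z_star : reference (ideal) point, e.g. the best value seen per objective
        """
        return np.max(np.asarray(w) * np.abs(np.asarray(f) - np.asarray(z_star)))
    ```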

    Generalized decomposition and cross entropy methods for many-objective optimization

    Decomposition-based algorithms for multi-objective optimization problems have increased in popularity in the past decade. Although their convergence to the Pareto optimal front (PF) is in several instances superior to that of Pareto-based algorithms, the problem of selecting a way to distribute or guide these solutions in a high-dimensional space has not been explored. In this work, we introduce a novel concept which we call generalized decomposition. Generalized decomposition provides a framework with which the decision maker (DM) can guide the underlying evolutionary algorithm toward specific regions of interest, or toward the entire Pareto front with a desired distribution of Pareto optimal solutions. Additionally, it is shown that generalized decomposition simplifies many-objective problems by unifying the three performance objectives of multi-objective evolutionary algorithms (convergence to the PF, evenly distributed Pareto optimal solutions, and coverage of the entire front) into only one: convergence. A framework built on generalized decomposition, together with an estimation of distribution algorithm (EDA) based on low-order statistics, namely the cross-entropy method (CE), is created to illustrate the benefits of the proposed concept for many-objective problems. This choice of EDA also enables testing the hypothesis that EDAs based on low-order statistics can have performance comparable to that of more elaborate EDAs.
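
    To make the low-order-statistics point concrete, here is a minimal, generic cross-entropy method loop for a single scalarized subproblem (all names and parameter values are illustrative assumptions, not the paper's implementation); the framework described above would couple such a solver with weight vectors chosen via generalized decomposition.

    ```python
    import numpy as np

    def cross_entropy_minimize(g, dim, n_samples=100, n_elite=10, iters=50, seed=0):
        """Minimize a scalarized objective g with the cross-entropy method.

        CE is an EDA that relies only on low-order statistics: it samples from
        a Gaussian, keeps the n_elite best samples under g, and refits the
        Gaussian's mean and standard deviation to those elites.
        """
        rng = np.random.default_rng(seed)
        mu, sigma = np.zeros(dim), np.ones(dim)
        for _ in range(iters):
            samples = rng.normal(mu, sigma, size=(n_samples, dim))
            scores = np.array([g(s) for s in samples])
            elites = samples[np.argsort(scores)[:n_elite]]
            mu = elites.mean(axis=0)               # first-order statistic
            sigma = elites.std(axis=0) + 1e-12     # second-order statistic
        return mu
    ```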

    Considering Harm and Safety in Youth Mental Health: A Call for Attention and Action

    The possibility of harm from mental health provision, and in particular harm from youth mental health provision, has been largely overlooked. We contend that if we continue to assume that youth mental health services can do no harm, and that all that is needed is more services, we continue to risk the possibility that the safety of children and young people is unintentionally compromised. We propose a three-level framework for considering harm from youth mental health provision, comprising (1) ineffective engagement, (2) ineffective practice, and (3) adverse events, and suggest how this framework could be used to support quality improvement in services.

    Energy-dependent quenching adjusts the excitation diffusion length to regulate photosynthetic light harvesting

    An important determinant of crop yields is the regulation of photosystem II (PSII) light harvesting by energy-dependent quenching (qE). However, the molecular details of excitation quenching have not been quantitatively connected to the PSII yield, which emerges only on the 100 nm scale of the grana membrane and determines flux to downstream metabolism. Here, we incorporate excitation dissipation by qE into a pigment-scale model of excitation transfer and trapping for a 200 nm x 200 nm patch of the grana membrane. We demonstrate that single-molecule measurements of qE are consistent with a weak-quenching regime. Consequently, excitation transport can be rigorously coarse-grained to a 2D random walk with an excitation diffusion length determined by the extent of quenching. A diffusion-corrected lake model substantially improves the PSII yield determined from variable chlorophyll fluorescence measurements and offers an improved model of PSII for photosynthetic metabolism.
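
    As a toy illustration of the coarse-graining (emphatically not the paper's pigment-scale model), the sketch below estimates an excitation diffusion length from a 2D random walk in which each hop can be dissipated by qE with some fixed probability; all names and parameter values are made-up assumptions.

    ```python
    import numpy as np

    def excitation_diffusion_length(p_quench=0.01, n_walkers=5000, step_nm=2.0, seed=0):
        """Toy 2D random walk: rms displacement before quenching, in nm.

        Each walker hops step_nm per step in a random direction and is
        dissipated with probability p_quench per step, so stronger quenching
        (larger p_quench) yields a shorter diffusion length.
        """
        rng = np.random.default_rng(seed)
        lifetimes = rng.geometric(p_quench, n_walkers)   # steps until quenched
        # Net displacement of an unbiased planar walk after n random-angle hops.
        angles = [rng.uniform(0.0, 2.0 * np.pi, n) for n in lifetimes]
        disp = [step_nm * np.hypot(np.cos(a).sum(), np.sin(a).sum()) for a in angles]
        return np.sqrt(np.mean(np.square(disp)))
    ```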

    Statistical Properties of Interacting Bose Gases in Quasi-2D Harmonic Traps

    The analytical probability distributions of the quasi-2D (and purely 2D) ideal and interacting Bose gases are investigated using a canonical ensemble approach. From the analytical probability distribution of the condensate, statistical properties such as the mean occupation number and the particle number fluctuations of the condensate are calculated. The results show that there is a continuous crossover of the statistical properties from a quasi-2D to a purely 2D ideal or interacting gas. In contrast to the case of a 3D Bose gas, the interaction between atoms profoundly changes the nature of the particle number fluctuations.
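
    For context, once the condensate probability distribution P(N_0) is known, the two quantities named above follow from its first two moments; in the generic notation assumed here (not taken from the paper):

    ```latex
    \langle N_0 \rangle = \sum_{N_0=0}^{N} N_0 \, P(N_0), \qquad
    \langle \delta^2 N_0 \rangle = \langle N_0^2 \rangle - \langle N_0 \rangle^2 .
    ```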

    A High Reliability Asymptotic Approach for Packet Inter-Delivery Time Optimization in Cyber-Physical Systems

    In cyber-physical systems such as automobiles, measurement data from sensor nodes should be delivered to consumer nodes such as actuators in a regular fashion. However, in practical systems operating over unreliable media such as wireless, it is a significant challenge to guarantee small enough inter-delivery times for different clients with heterogeneous channel conditions and inter-delivery requirements. In this paper, we design scheduling policies aimed at satisfying the inter-delivery requirements of such clients. We formulate the problem as a risk-sensitive Markov Decision Process (MDP). Although the resulting problem involves an infinite state space, we first prove that there is an equivalent MDP involving only a finite number of states. We then prove the existence of a stationary optimal policy and establish an algorithm to compute it in a finite number of steps. However, the bane of this and many similar problems is the resulting complexity, so, in an attempt to make fundamental progress, we further propose a new high-reliability asymptotic approach. In essence, this approach considers the scenario in which the channel failure probabilities of the different clients are of the same order and asymptotically approach zero. We then determine the asymptotically optimal policy: in a two-client scenario, we show that the asymptotically optimal policy is a "modified least time-to-go" policy, which is intuitively appealing and easily implementable; in the general multi-client scenario, we are led to an SN policy, and we develop an algorithm of low computational complexity to obtain it. Simulation results show that the resulting policies perform well even in the pre-asymptotic regime with moderate failure probabilities.
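
    To illustrate the flavour of a least time-to-go rule (the paper's asymptotically optimal policy is a "modified" variant whose details the abstract does not give), here is a minimal single-slot scheduler sketch; the names and the deadline bookkeeping are assumptions for illustration only.

    ```python
    import random

    def least_time_to_go_step(deadlines, elapsed, success_prob, rng=random):
        """Schedule one slot for clients sharing an unreliable channel.

        deadlines[i]    : client i's inter-delivery requirement (slots)
        elapsed[i]      : slots since client i's last successful delivery
        success_prob[i] : per-slot channel success probability for client i
        Picks the client whose deadline is nearest (least time-to-go),
        simulates the unreliable channel, and resets its timer on success.
        """
        # Time-to-go: slots left before the inter-delivery bound is violated.
        ttg = [d - e for d, e in zip(deadlines, elapsed)]
        i = min(range(len(ttg)), key=lambda k: ttg[k])   # most urgent client
        delivered = rng.random() < success_prob[i]
        elapsed = [0 if (k == i and delivered) else e + 1
                   for k, e in enumerate(elapsed)]
        return i, delivered, elapsed
    ```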